
    Teleological computer graphics modeling

    Summary form only given. Teleological modeling, a developing approach for creating abstractions and mathematical representations of physically realistic time-dependent objects, is described. In this approach, geometric constraint-properties, mechanical properties of objects, the parameters representing an object, and the control of the object are incorporated into a single conceptual framework. A teleological model incorporates time-dependent goals of behavior or purpose as the primary abstraction and representation of what the object is. A teleological implementation takes a geometrically incomplete specification of the motion, position, and shape of an object, and produces a geometrically complete description of the object's shape and behavior as a function of time. Teleological modeling techniques may be suitable for computer vision algorithms, extending current notions about how to make mathematical representations of objects. Teleological descriptions can produce compact representations for many of the physically derivable quantities controlling the shapes, combining operations, and constraints which govern the formation and motion of objects.

    Superquadrics and Angle-Preserving Transformations

    Over the past 20 years, a great deal of interest has developed in the use of computer graphics and numerical methods for three-dimensional design. Significant progress in geometric modeling is being made, predominantly for objects best represented by lists of edges, faces, and vertices. One long-term goal of this work is a unified mathematical formalism, to form the basis of an interactive and intuitive design environment in which designers can simulate three-dimensional scenes with shading and texture, produce usable design images, verify numerical machining-control commands, and set up finite-element meshes for structural and dynamic analysis. A new collection of smooth parametric objects and a new set of three-dimensional parametric modifiers show potential for helping to achieve this goal. The superquadric primitives and angle-preserving transformations extend the traditional geometric primitives (quadric surfaces and parametric patches) used in existing design packages, producing a new spectrum of flexible forms. Their chief advantage is that they allow complex solids and surfaces to be constructed and altered easily from a few interactive parameters.
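
    As a concrete illustration, the superellipsoid member of the family can be sampled from its standard parametric form. The sketch below is a minimal NumPy version; the function names and grid resolutions are illustrative choices, not from the paper.

    ```python
    import numpy as np

    def spow(base, expo):
        """Signed power sign(b) * |b|**e, the standard superquadric exponentiation."""
        return np.sign(base) * np.abs(base) ** expo

    def superellipsoid(a1, a2, a3, e1, e2, n_eta=33, n_omega=65):
        """Sample points on a superellipsoid surface:
            x = a1 * c(eta, e1) * c(omega, e2)
            y = a2 * c(eta, e1) * s(omega, e2)
            z = a3 * s(eta, e1)
        with c(t, e) = sign(cos t)|cos t|^e and s(t, e) = sign(sin t)|sin t|^e.
        """
        eta, omega = np.meshgrid(
            np.linspace(-np.pi / 2, np.pi / 2, n_eta),
            np.linspace(-np.pi, np.pi, n_omega),
            indexing="ij",
        )
        x = a1 * spow(np.cos(eta), e1) * spow(np.cos(omega), e2)
        y = a2 * spow(np.cos(eta), e1) * spow(np.sin(omega), e2)
        z = a3 * spow(np.sin(eta), e1)
        return x, y, z

    # e1 = e2 = 1 gives an ellipsoid; exponents near 0 give box-like solids,
    # exponents above 2 give pinched, star-like forms.
    x, y, z = superellipsoid(1.0, 1.0, 1.0, 0.3, 0.3)
    ```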

    Oriented tensor reconstruction: tracing neural pathways from diffusion tensor MRI

    In this paper we develop a new technique for tracing anatomical fibers from 3D tensor fields. The technique extracts salient tensor features using a local regularization technique that allows the algorithm to cross noisy regions and bridge gaps in the data. We applied the method to human brain DT-MRI data and recovered identifiable anatomical structures that correspond to the white matter brain-fiber pathways. The images in this paper are derived from a dataset having 121x88x60 resolution. We were able to recover fibers at finer than voxel-size resolution by applying the regularization technique, i.e., using a priori assumptions about fiber smoothness. The regularization is performed with a moving least squares filter incorporated directly into the tracing algorithm.
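
    At its core, such a tracer integrates a streamline along the tensor field's major eigenvector. The sketch below shows only that skeleton; the paper's moving least squares regularization is omitted, and `tensor_field` is a hypothetical callable standing in for an interpolated DT-MRI volume.

    ```python
    import numpy as np

    def principal_direction(D):
        """Major eigenvector of a 3x3 diffusion tensor."""
        w, v = np.linalg.eigh(D)      # eigenvalues in ascending order
        return v[:, -1]               # eigenvector of the largest eigenvalue

    def trace_fiber(tensor_field, seed, step=0.5, n_steps=200):
        """Basic streamline tracing through a diffusion tensor volume.

        tensor_field(p) -> 3x3 tensor at position p.  Plain Euler
        integration along the major eigenvector; the paper's moving
        least squares regularization of the direction is omitted here.
        """
        p = np.asarray(seed, dtype=float)
        d_prev = principal_direction(tensor_field(p))
        path = [p.copy()]
        for _ in range(n_steps):
            d = principal_direction(tensor_field(p))
            if np.dot(d, d_prev) < 0:   # eigenvectors have arbitrary sign;
                d = -d                  # keep a consistent orientation
            p = p + step * d
            d_prev = d
            path.append(p.copy())
        return np.array(path)

    # Example with a hypothetical constant field aligned with the x axis:
    D = np.diag([3.0, 1.0, 1.0])
    path = trace_fiber(lambda p: D, seed=[0.0, 0.0, 0.0])
    ```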

    Faster Calculation of Superquadric Shapes

    Nonparametric methods of calculating points on the surfaces of the recently introduced superquadric objects produce those shapes at great savings in time.
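
    The paper's specific speedups are only summarized above, but one nonparametric tactic is easy to state: solve the standard superellipsoid inside-outside function directly for one coordinate over an (x, y) grid, avoiding per-point signed powers of trigonometric functions. The following is an illustrative sketch of that idea, not the paper's algorithm.

    ```python
    import numpy as np

    def superellipsoid_height(x, y, a1, a2, a3, e1, e2):
        """Solve the superellipsoid implicit equation for z >= 0 on an (x, y) grid.

        Inside-outside function:
            (|x/a1|^(2/e2) + |y/a2|^(2/e2))^(e2/e1) + |z/a3|^(2/e1) = 1
        Points outside the silhouette (r > 1) get z = NaN.
        """
        r = (np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
        z = np.full_like(r, np.nan, dtype=float)
        inside = r <= 1.0
        z[inside] = a3 * (1.0 - r[inside]) ** (e1 / 2.0)
        return z

    xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    z = superellipsoid_height(xs, ys, 1.0, 1.0, 1.0, 0.3, 0.3)
    ```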

    Classification of Material Mixtures in Volume Data for Visualization and Modeling

    Material classification is a key step in creating computer graphics models and images from volume data. We present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). The algorithm assumes that voxels can contain more than one material, e.g. both muscle and fat; we wish to compute the relative proportion of each material in the voxels. Other classification methods have utilized Gaussian probability density functions to model the distribution of values within a dataset. These Gaussian basis functions work well for voxels with unmixed materials, but do not work well where the materials are mixed together. We extend this approach by deriving non-Gaussian "mixture" basis functions. We treat a voxel as a volume, not as a single point. We use the distribution of values within each voxel-sized volume to identify materials within the voxel using a probabilistic approach. The technique reduces the classification artifacts that occur along boundaries between materials. The technique is useful for making higher quality geometric models and renderings from volume data, and has the potential to make more accurate volume measurements. It also classifies noisy, low-resolution data well.
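
    One common way to derive such a mixture basis, consistent with the abstract, is to model a boundary voxel as a linear ramp between two material means blurred by Gaussian noise; integrating the Gaussian along the ramp yields a difference of normal CDFs. The sketch below shows the two basis shapes; the means, widths, and weights are illustrative values, not from the paper.

    ```python
    import numpy as np
    from scipy.special import erf

    def gaussian_basis(v, mean, sigma):
        """Histogram basis for a voxel containing a single pure material."""
        return np.exp(-0.5 * ((v - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def mixture_basis(v, c1, c2, sigma):
        """Histogram basis for a voxel straddling a two-material boundary.

        Models the voxel's values as a linear ramp from mean c1 to mean
        c2 plus Gaussian noise of width sigma; integrating the Gaussian
        along the ramp gives a difference of normal CDFs.
        """
        phi = lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0)))
        return (phi((v - c1) / sigma) - phi((v - c2) / sigma)) / (c2 - c1)

    # A voxel histogram can then be fit as a weighted sum of pure and
    # mixture bases; the weights estimate the material proportions.
    v = np.linspace(0, 255, 256)
    h_model = 0.6 * gaussian_basis(v, 80, 6) + 0.4 * mixture_basis(v, 80, 170, 6)
    ```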

    Parsimonious information technologies for pixels, perception, wetware and simulation: issues for Petrasek's global virtual hospital system

    New types of "engaging" embedded systems and devices will greatly assist future medical care, such as Petrasek's envisioned Global Virtual Hospital System. The most effective devices will need to be designed in a "parsimonious" way: economical in their use of energy, digital bits, and communication time, and trading more expensive physical structures for less expensive computational ones. At the technological level, each device needs a carefully selected "matched set" of tradeoffs between the particular medical and user ends and means, ensuring that the device "methods" and implementations lead reliably to the device "goals" and purposes. In addition, however, there is a critical user-oriented aspect: the devices will also need to provide highly "engaging environments" that are not too cumbersome or too tiring to use. People are becoming increasingly sophisticated about the interactive requirements they have for their devices, from their experience with digital media, iPhones, video games, and other environments that "engage" a person's attention for long periods of time without annoying delays and frustrations. It is an absolute requirement that the devices incorporate highly engaging environments so that using them does not tire the user or cause unnecessary medical errors and delays. These improved portable devices, scanners, services, and information methods would efficiently and more accurately gather sufficiently detailed medical information from the patient's body, relay sufficient parts of that information electronically to a worldwide network of physicians, and relay appropriate results and prescriptions back to the patient.

    Partial-volume Bayesian classification of material mixtures in MR volume data using voxel histograms

    The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurately and classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, ρ(x), from the samples and then looking at the distribution of values that ρ(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of regions that the authors classify is chosen to match the spacing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.
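
    A simplified sketch of the pipeline the abstract describes: sample the reconstructed ρ(x) over a voxel-sized region, histogram the values, then pick the material mixture that best explains the histogram. Here `rho` and the basis array are assumed inputs, and a least squares fit on the simplex stands in for the paper's Bayesian posterior maximization.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def voxel_histogram(rho, center, half=0.5, n=8, bins=64, vrange=(0.0, 255.0)):
        """Histogram of the reconstructed function rho(x) over one voxel region.

        rho(points) -> values; points are sampled on an n^3 grid inside
        the voxel-sized cube centered at `center`.
        """
        s = np.linspace(-half, half, n)
        gx, gy, gz = np.meshgrid(s, s, s, indexing="ij")
        pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) + center
        h, _ = np.histogram(rho(pts), bins=bins, range=vrange, density=True)
        return h

    def fit_mixture(h, bases):
        """Most-likely material mixture for a voxel histogram.

        bases: (k, bins) array of material histogram basis functions.
        Minimizes squared error over mixture weights on the simplex; the
        paper uses a full Bayesian posterior, this is a simplified stand-in.
        """
        k = bases.shape[0]
        obj = lambda a: np.sum((h - a @ bases) ** 2)
        cons = ({"type": "eq", "fun": lambda a: a.sum() - 1.0},)
        res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0, 1)] * k, constraints=cons)
        return res.x
    ```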

    Pure phase-encoded MRI and classification of solids

    Here, the authors combine a pure phase-encoded magnetic resonance imaging (MRI) method with a new tissue-classification technique to make geometric models of a human tooth. They demonstrate the feasibility of three-dimensional imaging of solids using a conventional 11.7-T NMR spectrometer. In solid-state imaging, confounding line-broadening effects are typically eliminated using coherent averaging methods. Instead, the authors circumvent them by detecting the proton signal at a fixed phase-encode time following the radio-frequency excitation. By a judicious choice of the phase-encode time in the MRI protocol, the authors differentiate enamel and dentine sufficiently to successfully apply a new classification algorithm. This tissue-classification algorithm identifies the distribution of different material types, such as enamel and dentine, in volumetric data. In this algorithm, the authors treat a voxel as a volume, not as a single point, and assume that each voxel may contain more than one material. They use the distribution of MR image intensities within each voxel-sized volume to estimate the relative proportion of each material using a probabilistic approach. This combined approach, involving MRI and data classification, is directly applicable to bone imaging and hard-tissue contrast-based modeling of biological solids.

    Constrained Differential Optimization

    Many optimization models of neural networks need constraints to restrict the space of outputs to a subspace which satisfies external criteria. Optimizations using energy methods yield "forces" which act upon the state of the neural network. The penalty method, in which quadratic energy constraints are added to an existing optimization energy, has become popular recently, but is not guaranteed to satisfy the constraint conditions when there are other forces on the neural model or when there are multiple constraints. In this paper, we present the basic differential multiplier method (BDMM), which satisfies constraints exactly; we create forces which gradually apply the constraints over time, using "neurons" that estimate Lagrange multipliers. The basic differential multiplier method is a differential version of the method of multipliers from numerical analysis. We prove that the differential equations locally converge to a constrained minimum. Examples of applications of the differential method of multipliers include enforcing permutation codewords in the analog decoding problem and enforcing valid tours in the traveling salesman problem.
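
    The BDMM dynamics are simple enough to state directly: the state performs gradient descent on the energy plus the multiplier-weighted constraint, while the multiplier "neuron" performs gradient ascent on the constraint itself. Below is a minimal sketch on a toy quadratic problem; the example problem is ours, not the paper's.

    ```python
    import numpy as np

    # Minimize f(x) subject to g(x) = 0 using BDMM dynamics:
    #     dx/dt   = -df/dx - lam * dg/dx    (gradient descent on the state)
    #     dlam/dt = +g(x)                   (gradient ascent on the multiplier)
    # Toy problem: f(x) = x0^2 + x1^2 subject to g(x) = x0 + x1 - 1 = 0.
    # Analytic solution: x = (0.5, 0.5).

    f_grad = lambda x: 2.0 * x
    g = lambda x: x[0] + x[1] - 1.0
    g_grad = lambda x: np.array([1.0, 1.0])

    x = np.array([0.0, 0.0])
    lam = 0.0
    dt = 0.01
    for _ in range(10000):                  # forward Euler integration
        x = x + dt * (-f_grad(x) - lam * g_grad(x))
        lam = lam + dt * g(x)

    print(x, g(x))   # x -> [0.5, 0.5], g(x) -> 0
    ```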

    Dynamic Splines with Constraints for Animation

    In this paper, we present a method for fast interpolation between animation keyframes that allows for automatic computer-generated "improvement" of the motion. Our technique is closely related to conventional animation techniques, and can be used easily in conjunction with them for fast improvements of "rough" animations or for interpolation to allow sparser keyframing. We apply our technique to construction of splines in quaternion space, where we show 100-fold speed-ups over previous methods. We also discuss our experiences with animation of an articulated human-like figure. Features of the method include: (1) development of new subdivision techniques based on the Euler-Lagrange differential equations for splines in quaternion space; (2) an intuitive and simple set of coefficients to optimize over, different from the conventional B-spline coefficients; (3) widespread use of unconstrained minimization, as opposed to the constrained optimization needed by many previous methods. This speeds up the algorithm significantly, while still maintaining keyframe constraints accurately.
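
    As a scalar stand-in for feature (3), the sketch below interpolates keyframes by unconstrained minimization of a discrete spline (bending) energy, holding the keyframed samples fixed so the constraints stay exact. The quaternion-space machinery of the paper is omitted, and plain gradient descent replaces its subdivision scheme.

    ```python
    import numpy as np

    def smooth_interpolate(n, keys, iters=20000, lr=0.02):
        """Keyframe interpolation by unconstrained minimization.

        Minimizes the sum of squared second differences (a discrete
        spline energy) over the free samples; keyframed samples are held
        fixed, so keyframe constraints hold by construction.

        keys: dict mapping sample index -> keyframe value.
        """
        x = np.zeros(n)
        idx = np.array(sorted(keys))
        x[idx] = [keys[i] for i in idx]
        free = np.setdiff1d(np.arange(n), idx)
        for _ in range(iters):
            acc = x[:-2] - 2 * x[1:-1] + x[2:]      # second differences
            grad = np.zeros(n)                      # gradient of sum(acc**2)
            grad[:-2] += 2 * acc
            grad[1:-1] += -4 * acc
            grad[2:] += 2 * acc
            x[free] -= lr * grad[free]              # update free samples only
            x[idx] = [keys[i] for i in idx]         # keyframes stay exact
        return x

    curve = smooth_interpolate(50, {0: 0.0, 20: 1.0, 49: -0.5})
    ```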